

It's time to prepare for AI personhood

Jacy Reese Anthis

The Guardian

'Digital minds will be participants in the social contract that forms the bedrock of human society.' Technological advances will bring social upheaval. How will we treat digital minds, and how will they treat us? Last month, when OpenAI released its long-awaited chatbot GPT-5, it briefly removed access to a previous chatbot, GPT-4o. Despite the upgrade, users flocked to social media to express confusion, outrage and depression.


Are AI Machines Making Humans Obsolete?

Scheutz, Matthias

arXiv.org Artificial Intelligence

Breakthroughs in technology are bound to happen. If human history has shown anything, it is that technological advancements are an essential driving force of human culture, and that societies that stop innovating will fall behind. Humans have always been fascinated by machines and have attempted to invent ever better ones that could take over more and more tasks that otherwise required human labor, from the early cranes in Mesopotamia, to the power loom and the steam engine, all the way to the first industrial robots (such as welding robots in the automotive industry), to modern-day aircraft, spacecraft, self-driving cars, and other kinds of autonomous machines. While machines initially were nothing but prostheses, augmenting and extending our own limited actuation capabilities because they had to be operated by humans, automation introduced self-sufficient machines that replaced human control with artificial, albeit limited, control systems capable of performing simple repeated tasks.


Interview with Gillian Hadfield: Normative infrastructure for AI alignment

AIHub

During the 33rd International Joint Conference on Artificial Intelligence (IJCAI), held in Jeju, I had the opportunity to meet with one of the keynote speakers, Gillian Hadfield. We spoke about her interdisciplinary research, career trajectory, path into AI alignment, law, and general thoughts on AI systems. Transcript: Note: the transcript has been lightly edited for clarity. This is an interview with Professor Gillian Hadfield, who was a keynote speaker at IJCAI 2024. She gave a very insightful talk about normative infrastructures and how they can guide our search for AI alignment. Kumar Kshitij Patel (KKP): Could you talk a bit about your background and career trajectory? I want our readers to understand how much interdisciplinary work you've done over the years. Gillian Hadfield (GH): I did a PhD in economics and a law degree, a JD, at Stanford, originally motivated by wanting to think about the big questions about the world. So I read John Rawls' theory of justice when I was an undergraduate, and those are the big questions: how do we organize the world and just institutions? But I was very interested in using more formal methods and social scientific approaches. That's why I decided to do that joint degree. So, this is in the 1980s, in the early days of starting to use a lot of game theory. I studied information theory as a student of Canaro and Paul Milgrom in the economics department at Stanford. I did work on contract theory and bargaining theory, but I was still very interested in going to law school, not to practice law, but to learn about legal institutions and how those work. I was a member of the emerging area of law and economics early in my career, which of course was interdisciplinary, using economics to think about law and legal institutions.


The computational power of a human society: a new model of social evolution

Wolpert, David H., Harper, Kyle

arXiv.org Artificial Intelligence

Social evolutionary theory seeks to explain increases in the scale and complexity of human societies, from origins to present. Over the course of the twentieth century, social evolutionary theory largely fell out of favor as a way of investigating human history, just as advances in complex systems science and computer science saw the emergence of powerful new conceptions of complex systems, and in particular new methods of measuring complexity. We propose that these advances in our understanding of complex systems and computer science should be brought to bear on our investigations into human history. To that end, we present a new framework for modeling how human societies co-evolve with their biotic environments, recognizing that both a society and its environment are computers. This leads us to model the dynamics of each of those two systems using the same, new kind of computational machine, which we define here. For simplicity, we construe a society as a set of interacting occupations and technologies. Similarly, under such a model, a biotic environment is a set of interacting distinct ecological and climatic processes. This provides novel ways to characterize social complexity, which we hope will cast new light on the archaeological and historical records. Our framework also provides a natural way to formalize both the energetic (thermodynamic) costs required by a society as it runs, and the ways it can extract thermodynamic resources from the environment in order to pay for those costs -- and perhaps to grow with any left-over resources.


MONAL: Model Autophagy Analysis for Modeling Human-AI Interactions

Yang, Shu, Ali, Muhammad Asif, Yu, Lu, Hu, Lijie, Wang, Di

arXiv.org Artificial Intelligence

The increasing significance of large models and their multi-modal variants in societal information processing has ignited debates on social safety and ethics. However, there is a paucity of comprehensive analysis of: (i) the interactions between human and artificial intelligence systems, and (ii) understanding and addressing the associated limitations. To bridge this gap, we propose Model Autophagy Analysis (MONAL) to explain large models' self-consumption. MONAL employs two distinct autophagous loops (referred to as "self-consumption loops") to elucidate the suppression of human-generated information in the exchange between human and AI systems. Through comprehensive experiments on diverse datasets, we evaluate the capacities of generative models as both creators and disseminators of information. Our key findings reveal: (i) a progressive prevalence of model-generated synthetic information over time within training datasets, compared to human-generated information; (ii) a discernible tendency of large models, when acting as information transmitters across multiple iterations, to selectively modify or prioritize specific contents; and (iii) the potential for a reduction in the diversity of socially or human-generated information, leading to bottlenecks in the performance enhancement of large models and confining them to local optima.
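The dynamic the abstract describes can be illustrated with a toy simulation; this is not the MONAL method itself, only a minimal sketch under assumed parameters (pool size, synthetic ratio, and a 0.8 spread-shrinkage factor standing in for mode collapse). Each round, a "model" fits the mean and spread of its training pool and refills part of the pool with its own samples; the synthetic share rises and diversity shrinks, mirroring findings (i) and (iii):

```python
import random
import statistics

def self_consumption_loop(iterations=5, pool_size=1000, synth_ratio=0.5, seed=0):
    """Toy self-consumption loop (illustrative only, not MONAL):
    each round the 'model' fits the pool's mean/stddev, then replaces
    half the pool with its own slightly narrower samples."""
    rng = random.Random(seed)
    # Human-generated data: diverse draws from N(0, 1), tagged by origin.
    pool = [("human", rng.gauss(0, 1)) for _ in range(pool_size)]
    history = []
    for _ in range(iterations):
        values = [v for _, v in pool]
        mu = statistics.fmean(values)
        sigma = statistics.stdev(values)
        # The model regenerates data around what it learned, with reduced
        # spread (a crude proxy for diversity loss / mode collapse).
        n_synth = int(pool_size * synth_ratio)
        synthetic = [("synthetic", rng.gauss(mu, sigma * 0.8)) for _ in range(n_synth)]
        kept = rng.sample(pool, pool_size - n_synth)
        pool = kept + synthetic
        synth_frac = sum(1 for tag, _ in pool if tag == "synthetic") / pool_size
        history.append((synth_frac, statistics.stdev(v for _, v in pool)))
    return history

hist = self_consumption_loop()
```

Run over five rounds, the synthetic fraction climbs toward 1 (0.5, ~0.75, ~0.875, ...) while the pool's standard deviation steadily falls, which is the "bottleneck" intuition in finding (iii): later models train mostly on earlier models' narrower output.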


#AAAI2024 workshops round-up 1: Cooperative multi-agent systems decision-making and learning

AIHub

A report on the Cooperative Multi-Agent Systems Decision-Making and Learning: From Individual Needs to Swarm Intelligence workshop, which took place at AAAI 2024, on 26 February. With the tremendous growth of AI technology, robotics, IoT, and high-speed wireless sensor networks (like 5G) in recent years, an artificial ecosystem has been formed, termed artificial social systems, that involves AI agents ranging from software entities to hardware devices. How to integrate artificial social systems into human society so that they coexist harmoniously is a critical issue. Here, rational decision-making and efficient learning from multi-agent system (MAS) interactions are the preconditions for guaranteeing that multiple agents work safely, balance group utilities against system costs in the long term, and satisfy group members' needs in their cooperation. From the cognitive modeling perspective, embodying the realistic constraints, capabilities, and tendencies of individual agents in their interactions, including physical and social environments, may provide a more realistic basis for understanding cooperative multi-agent interactions.


Is There Any Social Principle for LLM-Based Agents?

Bai, Jitao, Zhang, Simiao, Chen, Zhonghao

arXiv.org Artificial Intelligence

Similar to the common methodology in human social sciences, "social sciences" for the agent community may also be derived, with the human social sciences serving as the baseline. Since there exist inherent differences in the ways agents and humans act, the "social sciences" for agent society may also differ from those for human society.


(Machine) Learning to Be Like Thee? For Algorithm Education, Not Training

Blazquez, Susana Perez, Hipolito, Inas

arXiv.org Artificial Intelligence

This paper argues that Machine Learning (ML) algorithms must be educated. The moral decisions of ML-trained algorithms are ubiquitous in human society, sometimes reverting societal advances that governments, NGOs, and civil society have achieved with great effort in recent decades, or that are still on the path to being achieved. While their decisions have an incommensurable impact on human societies, these algorithms are among the least educated agents known (their data incomplete, un-inclusive, or biased). ML algorithms are not something separate from our human idiosyncrasy but an enactment of our most implicit prejudices and biases. Some research is devoted to "responsibility assignment" as a strategy to tackle immoral AI behaviour. Yet this paper argues that the solution for AI ethical decision-making resides in the "education" of ML algorithms (as opposed to their "training"). Drawing on an analogy between ML and child education for social responsibility, the paper offers clear directions for responsible and sustainable AI design, specifically with respect to how to educate algorithms to decide ethically.


Open Letter From AI Leaders: Let's Take A Break For Safety - CleanTechnica

#artificialintelligence

The danger of artificial intelligence is a common theme in science fiction because it allows authors and filmmakers to explore the ethical and societal questions that arise when humans develop entities that can rival or surpass their own intelligence. There are various reasons why this keeps popping up. The biggest one is probably loss of control: human beings fear the idea of losing control over the AI they create. This fear has often manifested in movies like "The Terminator," where an AI becomes so intelligent that it sees humanity as a threat to its survival and wages war against humans.


The Agent-based Modelling for Human Behaviour Special Issue

Lim, Soo Ling, Bentley, Peter J.

arXiv.org Artificial Intelligence

If human societies are so complex, then how can we hope to understand them? Artificial Life gives us one answer. The field of Artificial Life comprises a diverse set of introspective studies that largely ask the same questions, albeit from many different perspectives: Why are we here? Who are we? Why do we behave as we do? Starting with the origins of life provides us with fascinating answers to some of these questions. However, some researchers choose to bring their studies closer to the present day. We are, after all, human. It has been a few billion years since our ancestors were self-replicating molecules. Thus, more direct studies of ourselves and our human societies can reveal truths that may lead to practical knowledge. The papers in this special issue bring together scientists who choose to perform this kind of research.